The Neutrality Fallacy: When Algorithmic Fairness Interventions are (Not) Positive Action
Weerts, Hilde, Xenidis, Raphaële, Tarissan, Fabien, Olsen, Henrik Palmer, Pechenizkiy, Mykola
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when it comes to assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions often should be interpreted as a means to prevent discrimination, rather than a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a negative duty to 'do no harm' towards a positive obligation to actively 'do good' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
Liability and Risk in Programming Autonomous Vehicles - CPO Magazine
Many readers will remember the Knight Industries Two Thousand (or KITT) from the 1980s, the fascinating self-aware car driven by the Knight Rider, the Hoff. That car was powered by sophisticated artificial intelligence and machine learning. The best science fiction (and in the 1980s it was science fiction) has a habit of becoming science fact shortly afterwards. That time has arrived: autonomous vehicles are about to storm the world stage. We are undergoing a paradigm shift from passive, response-based systems in cars (such as cruise control, lane-change warning alarms and obstacle alarms) to fully active systems.
Pinnability: Machine learning in the home feed
The home feed, a collection of Pins from the people, boards and interests a Pinner follows, along with recommendations such as Picked for You, is the most heavily engaged part of the service and contributes a large fraction of total repins. The more people Pin, the better Pinterest gets for each person, which puts us in a unique position to serve up inspiration as a discovery engine on an ongoing basis. The home feed is a key way to discover new content, which is valuable to Pinners but poses a challenging question: given the ever-increasing number of Pins from various sources, how can we surface the most personalized and relevant ones? Pinnability is the collective name for the machine learning models we developed to help Pinners find the best content in their home feed.
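The idea described above, scoring each candidate Pin for a given Pinner and ranking the feed by predicted relevance, can be sketched in miniature. This is only an illustrative toy, not Pinterest's actual system: the feature names, weights, and the logistic-regression scorer are all assumptions, since the real Pinnability models and features are not described here.

```python
# Toy sketch of a Pinnability-style relevance ranker.
# All feature names and weights below are hypothetical.
from dataclasses import dataclass
from math import exp

@dataclass
class Pin:
    pin_id: str
    features: dict  # per-(Pinner, Pin) features, e.g. topic match, freshness

# Hypothetical learned weights for a simple logistic-regression scorer.
WEIGHTS = {"topic_match": 2.0, "source_affinity": 1.5, "freshness": 0.5}

def pinnability_score(pin: Pin) -> float:
    """Predicted probability that the Pinner engages with (e.g. repins) this Pin."""
    z = sum(WEIGHTS.get(name, 0.0) * value for name, value in pin.features.items())
    return 1.0 / (1.0 + exp(-z))  # logistic link: map score to (0, 1)

def rank_home_feed(candidates: list[Pin]) -> list[Pin]:
    """Order candidate Pins by predicted relevance, best first."""
    return sorted(candidates, key=pinnability_score, reverse=True)

candidates = [
    Pin("a", {"topic_match": 0.9, "source_affinity": 0.2, "freshness": 0.8}),
    Pin("b", {"topic_match": 0.1, "source_affinity": 0.9, "freshness": 0.3}),
]
feed = rank_home_feed(candidates)
print([p.pin_id for p in feed])  # → ['a', 'b']
```

In a production setting the scorer would be a trained model over many behavioral features rather than fixed weights, but the ranking step, sorting candidates by a per-Pinner relevance score, is the same shape.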